Noise robustness and stochastic tolerance of OT error-driven ranking algorithms
Abstract
Recent counterexamples show that Harmonic Grammar (HG) error-driven learning (with the classical Perceptron reweighing rule) is not robust to noise and does not tolerate the stochastic implementation (Magri 2014, MS). This article guarantees that no analogous counterexamples are possible for proper Optimality Theory (OT) error-driven learners. In fact, a simple extension of the OT convergence analysis developed in the literature (Tesar and Smolensky 1998, Linguist. Inq., 29, 229–268; Boersma 2009, Linguist. Inq., 40, 667–686; Magri 2012, Phonology, 29, 213–269) is shown to ensure stochastic tolerance and noise robustness of the OT learner. Implications for the comparison between the HG and OT implementations of constraint-based phonology are discussed.
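The error-driven ranking model the abstract refers to can be illustrated with a small sketch. The toy constraints, candidates, and promotion/demotion rule below are invented for illustration, in the spirit of Tesar and Smolensky (1998); they are not the article's exact algorithm or convergence construction.

```python
# Illustrative sketch of an OT error-driven (re)ranking learner.
# Constraint names, candidates, and the update rule are assumptions
# made for this example, not the article's own algorithm.

def ot_winner(candidates, violations, rank):
    # OT evaluation: compare violation profiles lexicographically,
    # reading constraints from highest- to lowest-ranked.
    order = sorted(rank, key=rank.get, reverse=True)
    return min(candidates, key=lambda c: [violations[c][k] for k in order])

# Toy input /tat/ with two candidate outputs: faithful "tat" violates
# NoCoda; truncated "ta" violates the faithfulness constraint Max.
violations = {"tat": {"Max": 0, "NoCoda": 1},
              "ta":  {"Max": 1, "NoCoda": 0}}
rank = {"Max": 0.0, "NoCoda": 1.0}   # initial ranking values
target = "tat"                        # observed surface form

for _ in range(10):  # stream of identical learning data
    guess = ot_winner(list(violations), violations, rank)
    if guess != target:
        # Error: demote constraints preferring the wrong winner,
        # promote constraints preferring the target.
        for k in rank:
            diff = violations[target][k] - violations[guess][k]
            if diff > 0:
                rank[k] -= 1.0   # loser-preferring: demote
            elif diff < 0:
                rank[k] += 1.0   # winner-preferring: promote

print(ot_winner(list(violations), violations, rank))  # prints "tat"
```

After a single error the learner re-ranks Max above NoCoda and makes no further mistakes on this stream; the article's contribution concerns guarantees that such convergence survives noise and a stochastic implementation.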
Similar resources
Error-driven Learning in OT and HG: A Comparison
The OT error-driven learner is known to admit guarantees of efficiency, stochastic tolerance and noise robustness which hold independently of any substantive assumptions on the constraints. This paper shows that the HG learner instead does not admit such constraint-independent guarantees. The HG theory of error-driven learning thus needs to be substantially restricted to specific constraint sets.
Convergence of Error-driven Ranking Algorithms
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. This learning model is very popular in the OT acquisition literature, in particular because it predicts a sequence of rankings that models gradualness in ch...
Computational Method for Fractional-Order Stochastic Delay Differential Equations
Dynamic systems in many branches of science and industry are often perturbed by various types of environmental noise. Analysis of this class of models are very popular among researchers. In this paper, we present a method for approximating solution of fractional-order stochastic delay differential equations driven by Brownian motion. The fractional derivatives are considered in the Caputo sense...
Stochastic Optimization for Large-scale Optimal Transport
Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale OT problems. These methods can handle arbitrary distributions (either discrete or continuous) as lo...
Error-driven Learning in Harmonic Grammar
The HG literature has adopted so far the Perceptron reweighing rule because of its convergence guarantees. Yet, this rule is not suited to HG, as it fails at ensuring non-negativity of the weights. The first contribution of this paper is a solution to this impasse. I consider a variant of the Perceptron which truncates any update at zero, thus maintaining the weights non-negative in a principle...
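The truncation described in this snippet has a very simple form: perform the standard Perceptron reweighting step, then clip any weight that would go negative at zero. The constraint violations and learning rate below are invented for illustration; only the clip-at-zero idea comes from the snippet.

```python
# Illustrative sketch of the truncated Perceptron update: a standard
# Perceptron step followed by clipping at zero, so HG constraint
# weights stay non-negative. Data values are invented for this example.

def truncated_perceptron_update(weights, winner_viols, loser_viols, rate=1.0):
    # Move each weight by the violation difference (loser minus winner),
    # then truncate any negative result at zero.
    return [max(0.0, w + rate * (l - wv))
            for w, wv, l in zip(weights, winner_viols, loser_viols)]

w = [0.5, 0.0]  # current HG weights, one per constraint
w = truncated_perceptron_update(w, winner_viols=[2, 0], loser_viols=[0, 1])
print(w)  # prints [0.0, 1.0]: the first weight is clipped instead of going negative
```

An untruncated Perceptron step would have driven the first weight to -1.5, which is uninterpretable as an HG weight; the truncation is what keeps the weight vector in the non-negative orthant.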
Journal: J. Log. Comput.
Volume: 26
Publication date: 2016